gradient descent Search Results


Image Search Results



Journal: Proceedings. IEEE International Conference on Computer Vision

Article Title: Scaling Recurrent Models via Orthogonal Approximations in Tensor Trains

doi: 10.1109/iccv.2019.01067

Figure Legend Snippet: (left) Mean squared error for different TT-ranks, using both the Riemannian formulation (3) and the approximate Stiefel formulation (4). (center) Effect of TT-rank on the per-iteration runtime of both methods; OTT is significantly (about 10x) faster than the Riemannian formulation. (right) Memory dependence of the TT and OTT constructions as a function of rank; the OTT formulation allows models roughly double the size of TT.

Article Snippet: We use a Riemannian gradient descent technique on this product of Stiefel manifolds $\mathcal{P}_S$. Given $\{Q_i^t(x_i)\}$ as the solution of the $t$-th step, the $(t+1)$-th solution $\{Q_i^{t+1}(x_i)\}$ can be computed as

$$\{Q_i^{t+1}(x_i)\} = \mathrm{Exp}\Big(\{Q_i^t(x_i)\},\ \frac{\partial E}{\partial \{Q_j^t(x_j)\}}\Big), \qquad (9)$$

where $\mathrm{Exp}$ is the Riemannian exponential map on $\mathcal{P}_S$. On $\mathcal{P}_S$, computing the exponential map is not tractable and itself requires an optimization, so a Riemannian retraction map is used instead, as proposed in [14]. The algorithm referenced in the paper summarizes this procedure.
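The update in the snippet can be sketched for a single Stiefel factor. This is a minimal illustration, not the paper's method: it uses a QR-based retraction in place of the specific retraction from [14], operates on one Stiefel manifold rather than the product $\mathcal{P}_S$, and the quadratic objective, step size, and matrix sizes are made-up toy choices. Each iteration projects the Euclidean gradient onto the tangent space at the current point, takes a descent step, and retracts back onto the manifold.

```python
import numpy as np

def project_tangent(Q, G):
    # Project a Euclidean gradient G onto the tangent space of the
    # Stiefel manifold at Q: G - Q * sym(Q^T G).
    QtG = Q.T @ G
    return G - Q @ ((QtG + QtG.T) / 2)

def qr_retract(Y):
    # QR-based retraction: map a full-rank matrix back onto the
    # Stiefel manifold (orthonormal columns). Column signs are fixed
    # so that R has a nonnegative diagonal, making the map continuous.
    Qf, R = np.linalg.qr(Y)
    return Qf * np.sign(np.diag(R))

def riemannian_gd(Q, euclid_grad, lr=0.02, steps=1000):
    # Riemannian gradient descent: project, step, retract.
    for _ in range(steps):
        xi = project_tangent(Q, euclid_grad(Q))
        Q = qr_retract(Q - lr * xi)
    return Q

# Toy objective: minimize E(Q) = -trace(Q^T A Q) over 5x2 matrices
# with orthonormal columns; minimizers span a top eigenspace of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = A + A.T                       # symmetrize
Q0 = qr_retract(rng.standard_normal((5, 2)))
Q = riemannian_gd(Q0, lambda Q: -2 * A @ Q)
# Q^T Q stays (numerically) equal to the identity throughout.
```

The retraction is what makes the iteration tractable: as the snippet notes, the exact exponential map on this manifold requires an inner optimization, while a QR factorization is a cheap first-order surrogate that still keeps every iterate exactly on the manifold.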

Techniques: Formulation